Generalized minimum-distance decoding : English Wikipedia edition
Generalized minimum-distance decoding
In coding theory, generalized minimum-distance (GMD) decoding provides an efficient algorithm for decoding concatenated codes, based on using an errors-and-erasures decoder for the outer code.
A naive decoding algorithm for concatenated codes cannot be optimal, because it does not take into account the information that maximum likelihood decoding (MLD) gives. In other words, the naive algorithm treats all decoded inner codewords the same, regardless of their Hamming distances to the received inner blocks. Intuitively, the outer decoder should place higher confidence in symbols whose inner encodings are close to the received word. In 1966, David Forney devised a better algorithm, called generalized minimum distance (GMD) decoding, which makes better use of this information: it measures the confidence of each decoded inner symbol and erases symbols whose confidence falls below a desired threshold. GMD decoding was one of the first examples of a soft-decision decoder. We will present three versions of the GMD decoding algorithm; the first two are randomized algorithms, while the last one is deterministic.
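As a rough illustration of the idea, the following Python sketch (function names are ours, not from the article) decodes each inner block by brute-force MLD, takes the Hamming distance to the nearest inner codeword as a confidence measure, and passes low-confidence symbols to the outer errors-and-erasures decoder as erasures. For simplicity this uses a deterministic threshold of d/2; the GMD algorithms themselves randomize the erasure decision.

```python
def hamming(u, v):
    # Delta(u, v): number of coordinates in which u and v differ.
    return sum(a != b for a, b in zip(u, v))

def inner_mld(block, inner_codebook):
    # Brute-force MLD over the inner code; feasible when k = O(log N),
    # since the inner code then has only poly(N) codewords.
    best = min(inner_codebook, key=lambda c: hamming(c, block))
    return best, hamming(best, block)

def gmd_symbols(received_blocks, inner_codebook, d):
    """Map each received inner block to an outer symbol or an erasure (None).

    Deterministic threshold variant, for illustration only: a block is
    erased when its distance to the nearest inner codeword is at least d/2.
    """
    out = []
    for block in received_blocks:
        codeword, dist = inner_mld(block, inner_codebook)
        out.append(None if 2 * dist >= d else codeword)
    return out
```

For example, with the binary repetition code of length 4 as the inner code (d = 4), a block at distance 2 from both codewords is ambiguous and gets erased, while blocks at distance at most 1 are decoded with confidence.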
==Setup==
# Hamming distance : Given two vectors u, v \in \Sigma^n, the Hamming distance between u and v, denoted \Delta(u, v), is defined to be the number of positions in which u and v differ.
# Minimum distance : Let C \subseteq \Sigma^n be a code. The minimum distance of the code C is defined to be d = \min_{c_1 \ne c_2 \in C} \Delta(c_1, c_2).
# Code concatenation : Given m = (m_1, \ldots, m_K) \in [Q]^K, consider two codes, which we call the outer code C_\text{out} : [Q]^K \rightarrow [Q]^N and the inner code C_\text{in} : [q]^k \rightarrow [q]^n, with distances D and d respectively. A concatenated code is obtained as C_\text{out} \circ C_\text{in}(m) = (C_\text{in}(C_\text{out}(m)_1), \ldots, C_\text{in}(C_\text{out}(m)_N)), where C_\text{out}(m) = (C_\text{out}(m)_1, \ldots, C_\text{out}(m)_N). Finally, we will take C_\text{out} to be a Reed–Solomon (RS) code, which has an errors-and-erasures decoder, and k = O(\log N), which in turn implies that MLD on the inner code runs in poly(N) time.
# Maximum likelihood decoding (MLD) : MLD is a decoding method for error-correcting codes that outputs the codeword closest to the received word in Hamming distance. The MLD function, denoted D_\text{MLD} : \Sigma^n \rightarrow C, is defined as follows: for every y \in \Sigma^n, D_\text{MLD}(y) = \arg\min_{c \in C} \Delta(c, y).
# Probability distribution : A probability distribution \Pr on a sample space S is a mapping from events of S to real numbers such that \Pr[A] \ge 0 for any event A, \Pr[S] = 1, and \Pr[A \cup B] = \Pr[A] + \Pr[B] for any two mutually exclusive events A and B.
# Expected value : The expected value of a discrete random variable X is \mathbb{E}[X] = \sum_x x \cdot \Pr[X = x].
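The first, second, and fourth definitions above can be made concrete with a short self-contained Python sketch over a toy binary code (function names are ours, for illustration only):

```python
from itertools import combinations

def hamming_distance(u, v):
    # Delta(u, v): the number of positions in which u and v differ.
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))

def minimum_distance(code):
    # d = min over distinct pairs c1 != c2 in C of Delta(c1, c2).
    return min(hamming_distance(c1, c2)
               for c1, c2 in combinations(code, 2))

def d_mld(y, code):
    # D_MLD(y) = argmin over c in C of Delta(c, y).
    return min(code, key=lambda c: hamming_distance(c, y))
```

For the binary repetition code C = {00000, 11111}, minimum_distance returns 5, and d_mld maps any received word with at most two ones to 00000.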

Excerpt source: Wikipedia, the free encyclopedia. The full text of "Generalized minimum-distance decoding" is available on Wikipedia.



Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.